Data Fine-tuning
In real-world applications, commercial off-the-shelf systems are utilized for
performing automated facial analysis including face recognition, emotion
recognition, and attribute prediction. However, a majority of these commercial
systems act as black boxes due to the inaccessibility of the model parameters,
which makes it challenging to fine-tune the models for specific applications.
Stimulated by the advances in adversarial perturbations, this research proposes
the concept of Data Fine-tuning to improve the classification accuracy of a
given model without changing the model's parameters. This is accomplished
by modeling it as a data (image) perturbation problem. A small amount of "noise"
is added to the input with the objective of minimizing the classification loss
without affecting the (visual) appearance. Experiments performed on three
publicly available datasets, LFW, CelebA, and MUCT, demonstrate the
effectiveness of the proposed concept.
Comment: Accepted in AAAI 201
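The core idea, optimizing a small input perturbation against a frozen model rather than the model's weights, can be sketched as follows. This is a toy illustration with a hypothetical linear classifier and random data, not the paper's method or the commercial systems it targets:

```python
import torch

torch.manual_seed(0)

# Stand-in for a fixed, inaccessible model: we freeze its parameters and
# only ever take gradients with respect to the input.
model = torch.nn.Linear(8, 2)
for p in model.parameters():
    p.requires_grad_(False)

x = torch.randn(32, 8)          # stand-in for input images
y = torch.randint(0, 2, (32,))  # ground-truth labels

# Learnable perturbation added to every input ("data fine-tuning").
delta = torch.zeros(1, 8, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
loss_fn = torch.nn.CrossEntropyLoss()

loss_before = loss_fn(model(x), y).item()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x + delta), y)  # model is frozen; only delta learns
    loss.backward()
    opt.step()
    # Keep the perturbation small so the input's appearance is preserved.
    with torch.no_grad():
        delta.clamp_(-0.1, 0.1)

loss_after = loss_fn(model(x + delta), y).item()
```

Because only `delta` carries gradients, the classification loss drops without a single model parameter changing, which mirrors the black-box constraint described above.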
Are Face Detection Models Biased?
The presence of bias in deep models leads to unfair outcomes for certain
demographic subgroups. Research on bias focuses primarily on face recognition
and attribute prediction, with little emphasis on face detection. Existing
studies consider face detection as binary classification into 'face' and
'non-face' classes. In this work, we investigate possible bias in the domain of
face detection through facial region localization, which is currently
unexplored. Since facial region localization is an essential task for all face
recognition pipelines, it is imperative to analyze the presence of such bias in
popular deep models. Most existing face detection datasets lack suitable
annotation for such analysis. Therefore, we web-curate the Fair Face
Localization with Attributes (F2LA) dataset and manually annotate more than 10
attributes per face, including facial localization information. Utilizing the
extensive annotations from F2LA, an experimental setup is designed to study the
performance of four pre-trained face detectors. We observe (i) a high disparity
in detection accuracies across gender and skin tone, and (ii) the interplay of
confounding factors beyond demography. The F2LA data and associated annotations
can be accessed at http://iab-rubric.org/index.php/F2LA.
Comment: Accepted in FG 202
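The disparity analysis described above reduces to comparing per-subgroup detection accuracy. A minimal sketch with made-up outcomes (illustrative only, not F2LA data or the paper's evaluation code):

```python
from collections import defaultdict

# Hypothetical per-face outcomes: (subgroup, detected_correctly) pairs,
# standing in for a detector's localization results on annotated faces.
results = [
    ("light-skin", True), ("light-skin", True), ("light-skin", False),
    ("dark-skin", True), ("dark-skin", False), ("dark-skin", False),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [hits, total]
for group, hit in results:
    counts[group][0] += int(hit)
    counts[group][1] += 1

# Per-subgroup detection accuracy and the max-min accuracy gap.
accuracy = {g: hits / total for g, (hits, total) in counts.items()}
disparity = max(accuracy.values()) - min(accuracy.values())
```

A large `disparity` between subgroups is the kind of gap the study reports across gender and skin tone; confounding factors beyond demography would require conditioning on the additional F2LA attributes.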
On Learning Deep Models with Imbalanced Data Distribution
The availability of large training data has led to the development of sophisticated deep learning algorithms that achieve state-of-the-art performance on various tasks, and many applications have benefited immensely. Despite this unparalleled success, the performance of deep learning algorithms depends significantly on the training data distribution: an imbalance in the training data distribution degrades the performance of deep models. Our research focuses on designing and developing solutions for different real-world problems with imbalanced data distributions, specifically facial analytic tasks. These problems include injured face recognition, fake image detection, and the estimation and mitigation of bias in model predictions.
Anatomizing Bias in Facial Analysis
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups. Because of their impact on society, it has become imperative to ensure that these systems do not discriminate based on the gender, identity, or skin tone of individuals. This has led to research on the identification and mitigation of bias in AI systems. In this paper, we encapsulate bias detection/estimation and mitigation algorithms for facial analysis. Our main contributions include a systematic review of algorithms proposed for understanding bias, along with a taxonomy and extensive overview of existing bias mitigation algorithms. We also discuss open challenges in the field of biased facial analysis.